
fix(cli-kit): batch large output chunks to prevent event loop blocking#7008

Open
alfonso-noriega wants to merge 3 commits into main from fix-stack-trace-overload

Conversation


@alfonso-noriega alfonso-noriega commented Mar 13, 2026

Problem

When a POS UI extension throws a large stack trace, the entire output arrives as a single write() call to ConcurrentOutput's Writable stream. The synchronous processing of large chunks — stripAnsi(), split(/\n/), and setProcessOutput with a spread of the full accumulated array — causes a render cycle expensive enough to block the Node.js event loop. During that time, Ink's useInput hook cannot process keypresses, so q (quit) and p (preview) become unresponsive.

Fix

The writableStream write handler now uses a two-stage pipeline: a Transform stream that splits large chunks into batches of MAX_LINES_PER_BATCH (20) lines, followed by a Writable sink that renders each batch. For batches from large chunks, setImmediate(next) yields to the event loop between renders so Ink's input handling can run.

  • Chunks ≤ 20 lines: unchanged behavior, next() called synchronously
  • Chunks > 20 lines: split into 20-line batches with setImmediate scheduling between each batch

The Transform stream preserves the existing outputContextStore context (prefix and stripAnsi overrides) and maintains proper Node.js backpressure by deferring next() until each batch is processed.
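
For illustration, the line-splitting step described above can be sketched as a pure helper. This is a simplified sketch: `MAX_LINES_PER_BATCH` matches the value in the PR, but the function name and shape are hypothetical, not the actual cli-kit code.

```javascript
const MAX_LINES_PER_BATCH = 20

// Split a raw output chunk into batches of at most MAX_LINES_PER_BATCH lines.
// A trailing newline is trimmed first so it doesn't yield a spurious empty line.
function splitIntoBatches(chunk) {
  const lines = chunk.replace(/\n$/, '').split('\n')
  const batches = []
  for (let i = 0; i < lines.length; i += MAX_LINES_PER_BATCH) {
    batches.push(lines.slice(i, i + MAX_LINES_PER_BATCH).join('\n'))
  }
  return batches
}
```

A 50-line chunk yields three batches of 20, 20, and 10 lines; a chunk at or under the threshold passes through as a single batch, which is what keeps the small-chunk path's behavior unchanged.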

Test

Added a test that writes 250 lines as a single chunk and verifies all lines appear in the rendered output without being dropped during batch processing.

When a POS UI extension throws a large stack trace (3MB+), all lines
arrive as a single write to ConcurrentOutput's Writable stream. The
synchronous stripAnsi + split + React state update causes a long render
cycle that blocks the Node.js event loop, making keyboard shortcuts
(q, p) unresponsive.

Fix: split chunks exceeding 100 lines into batches and schedule each
via setImmediate, yielding to the event loop between renders so Ink's
useInput hook can process keypresses between batches.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
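
A minimal sketch of the recursive-setImmediate scheduling this commit describes. The function name, callback shape, and the 100-line threshold used here are illustrative (the threshold was later tuned down), not the actual cli-kit implementation.

```javascript
const MAX_LINES_PER_BATCH = 100 // threshold used by this commit; later tuned down

// Render `lines` in batches, yielding to the event loop between batches so
// Ink's useInput hook gets a chance to process pending keypresses.
function renderInBatches(lines, renderBatch, done) {
  let offset = 0
  function step() {
    renderBatch(lines.slice(offset, offset + MAX_LINES_PER_BATCH))
    offset += MAX_LINES_PER_BATCH
    if (offset < lines.length) {
      setImmediate(step) // yield before rendering the next batch
    } else {
      done()
    }
  }
  step()
}
```

Small chunks complete synchronously in a single step(); only oversized chunks pay the setImmediate scheduling cost.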

github-actions bot commented Mar 13, 2026

Coverage report

Category        Percentage   Covered / Total
🟡 Statements    77.25%       14578/18871
🟡 Branches      70.88%       7230/10200
🟡 Functions     76.22%       3702/4857
🟡 Lines         78.74%       13779/17500

Test suite run success

3807 tests passing in 1462 suites.

Report generated by 🧪jest coverage report action from 1afb5c2

Benchmarking with a 3.85MB stack trace (30k lines) shows:
- batch=100: 121ms max event loop block
- batch=20:   44ms max event loop block (vs 27,266ms on main)
- batch=10:   35ms max event loop block

Batch=20 hits the sweet spot: roughly 3x lower max block than batch=100
with essentially the same total flush time (~380ms vs ~430ms).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
This stack of pull requests is managed by Graphite. Learn more about stacking.

@alfonso-noriega alfonso-noriega marked this pull request as ready for review March 17, 2026 16:05
@alfonso-noriega alfonso-noriega requested a review from a team as a code owner March 17, 2026 16:05
@github-actions

We detected some changes at packages/*/src and there are no updates in the .changeset.
If the changes are user-facing, run pnpm changeset add to track your changes and include them in the next release CHANGELOG.

Caution

DO NOT create changesets for features which you do not wish to be included in the public changelog of the next CLI release.


@ryancbahan ryancbahan left a comment


nice job! the current approach is in the right direction, but using setImmediate with recursion isn't going to give you backpressure (something Node is good at managing); it's just spreading work across event loop ticks. I'm pretty sure you can leverage Node's streams to split the work and get backpressure automatically with something like this:

  import {Transform, Writable} from 'node:stream'

  const splitter = new Transform({
    transform(chunk, _encoding, callback) {
      const lines = chunk.toString('utf8').replace(/\n$/, '').split('\n')
      for (let i = 0; i < lines.length; i += MAX_LINES_PER_BATCH) {
        this.push(lines.slice(i, i + MAX_LINES_PER_BATCH).join('\n'))
      }
      callback()
    },
  })

  const writable = new Writable({
    write(chunk, _encoding, next) {
      // existing logic, unchanged — always gets small chunks now
      const log = chunk.toString('utf8')
      // ...
      next()
    },
  })

  splitter.pipe(writable)
  // hand `splitter` to the process as its stdout/stderr

that way, when the writable hasn't called next() yet, the internal buffer fills up and Node automatically pauses the producer. that lets the main flow stay mostly agnostic of chunk management and lets Node throttle the stream according to the local machine's memory limits.

…form+pipe

Replace the manual recursive-setImmediate approach with a proper Node.js
stream pipeline:

- Transform (splitter): reads outputContextStore while still in the
  writer's async context, strips ANSI, and splits large chunks into
  MAX_LINES_PER_BATCH (20) line pieces. Single-batch writes pass through
  unchanged.

- Writable (sink): renders each batch into React state. For large-chunk
  batches setImmediate(next) yields the event loop between renders so
  keyboard shortcuts (q, p) can fire. It also creates real Node.js
  backpressure: when next() is pending the pipe pauses the splitter,
  capping memory use from fast producers without manual bookkeeping.
  Single-batch writes call next() synchronously to preserve existing
  rendering behaviour.

Benchmark (3.85 MB / 30k-line stack trace):
  main:            27,266ms max event loop block
  recursive fix:       44ms max event loop block
  stream fix:          32ms max event loop block

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>